Black-Box Attacks on Deep Neural Networks via Gradient Estimation
Authors
Abstract
In this paper, we propose novel Gradient Estimation black-box attacks to generate adversarial examples with query access to the target model’s class probabilities, which do not rely on transferability. We also propose strategies to decouple the number of queries required to generate each adversarial example from the dimensionality of the input. An iterative variant of our attack achieves close to 100% attack success rates for both targeted and untargeted attacks on DNNs. We show that the proposed Gradient Estimation attacks outperform all other black-box attacks we tested on both the MNIST and CIFAR-10 datasets, achieving attack success rates similar to well-known state-of-the-art white-box attacks. We also apply the Gradient Estimation attacks successfully against a real-world content moderation classifier hosted by Clarifai.
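The core idea of a gradient-estimation attack can be sketched with two-sided finite differences: the attacker queries the model's class probabilities at perturbed inputs to approximate the loss gradient, then takes an FGSM-style step. This is a minimal illustrative sketch, not the paper's exact method; `loss_fn` is a hypothetical stand-in for an attacker loss computed from the returned probabilities, and the query-reduction strategies the paper proposes (grouping dimensions, random feature grouping) are omitted here.

```python
import numpy as np

def estimate_gradient(f, x, delta=1e-4):
    """Estimate the gradient of a scalar query function f at x with
    two-sided finite differences: one pair of queries per dimension."""
    grad = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        e = np.zeros_like(x, dtype=float)
        e.flat[i] = delta
        grad.flat[i] = (f(x + e) - f(x - e)) / (2 * delta)
    return grad

def fgsm_step(loss_fn, x, eps=0.3):
    """One FGSM-style untargeted step driven only by estimated gradients;
    inputs are assumed to live in [0, 1] (as for MNIST/CIFAR-10 pixels)."""
    g = estimate_gradient(loss_fn, x)
    return np.clip(x + eps * np.sign(g), 0.0, 1.0)
```

Note that the naive loop costs two queries per input dimension, which is exactly the dependence the paper's query-reduction strategies aim to break.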
Similar resources
ICLR 2018: Attention-Based Guided Structured Sparsity of Deep Neural Networks
Network pruning aims to impose sparsity on a neural network architecture by increasing the fraction of zero-valued weights, reducing the network's size for energy efficiency and increasing evaluation speed. Most prior research efforts enforce sparsity for network pruning without attention to internal network characteristics such as unbalanced out...
Hardware-Aware Exponential Approximation for Deep Neural Networks
In this paper, we address the problem of cost-efficient inference for non-linear operations in deep neural networks (DNNs), in particular the exponential function e^x in the softmax layer of DNNs for object detection. The goal is to minimize the hardware cost in terms of energy and area while maintaining the application accuracy. To this end, we introduce Piecewise Linear Function (PLF) for approxi...
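A piecewise linear approximation of e^x for a softmax layer can be sketched by interpolating between exact exponential values at fixed knot points. This is only an illustration of the general PLF idea under assumed knot placement, not the hardware-aware scheme of that paper; after the standard max-shift, softmax inputs are non-positive, so knots on a bounded negative interval suffice.

```python
import numpy as np

# Knots on [-8, 0]: after the max-shift, softmax inputs are <= 0, and
# exp(x) for x < -8 is negligible. 33 knots give 0.25-wide segments.
KNOTS = np.linspace(-8.0, 0.0, 33)

def plf_exp(x, knots=KNOTS):
    """Piecewise linear approximation of e^x: linearly interpolate
    between exact values of exp at the knot points."""
    return np.interp(x, knots, np.exp(knots))

def plf_softmax(logits):
    """Softmax using the PLF exponential instead of exact exp."""
    z = logits - logits.max()  # standard max-shift for stability
    e = plf_exp(z)
    return e / e.sum()
```

With 0.25-wide segments the worst-case interpolation error of e^x on [-8, 0] is below 0.01, so the resulting probabilities stay close to the exact softmax while the hardware only needs comparisons, multiplies, and adds.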
Local Explanation Methods for Deep Neural Networks Lack Sensitivity to Parameter Values
Explaining the output of a complicated machine learning model like a deep neural network (DNN) is a central challenge in machine learning. Several proposed local explanation methods address this issue by identifying what dimensions of a single input are most responsible for a DNN’s output. The goal of this work is to assess the sensitivity of local explanations to DNN parameter values. Somewhat...
Empirical Risk Landscape Analysis for Understanding Deep Neural Networks
This work aims to provide a comprehensive landscape analysis of the empirical risk in deep neural networks (DNNs), including the convergence behavior of its gradient, its stationary points, and the empirical risk itself to their corresponding population counterparts, which reveals how various network parameters determine convergence performance. In particular, for an l-layer linear neural network ...
Quantized Back-Propagation: Training Binarized Neural Networks with Quantized Gradients
Binarized neural networks (BNNs) have been shown to be effective in improving network efficiency during the inference phase, after the network has been trained. However, BNNs only binarize the model parameters and activations during propagation. We show there is no inherent difficulty in training BNNs using "Quantized Back-Propagation" (QBP), in which we also quantize the error gradients and i...
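Gradient quantization of this kind is commonly built on stochastic rounding, which keeps the quantized gradient unbiased in expectation. The sketch below is a generic stochastic fixed-point quantizer under assumed parameters, an illustrative stand-in rather than the QBP quantizer of that paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_round(x, scale):
    """Stochastically round x to the grid {k * scale}: round up with
    probability equal to the fractional position, so E[result] = x."""
    y = x / scale
    low = np.floor(y)
    return scale * (low + (rng.random(y.shape) < (y - low)))

def quantize_grad(g, bits=8):
    """Quantize a gradient tensor onto a signed fixed-point grid whose
    scale is set from the tensor's max magnitude."""
    scale = np.max(np.abs(g)) / (2 ** (bits - 1) - 1) + 1e-12
    return stochastic_round(g, scale)
```

Because rounding is stochastic rather than nearest-neighbor, repeated SGD steps average out the quantization noise instead of accumulating a systematic bias.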
Journal:
Volume/Issue:
Pages: -
Publication date: 2018